
    Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images

    [EN] The number of blind people in the world is estimated to exceed 40 million by 2025. It is therefore necessary to develop novel algorithms, based on fundus image descriptors, that allow the automatic classification of retinal tissue into healthy and pathological at early stages. In this paper, we focus on one of the most common pathologies in current society: diabetic retinopathy. The proposed method avoids the need for lesion segmentation or candidate-map generation before the classification stage. Local binary patterns and granulometric profiles are computed locally to extract texture and morphological information from retinal images. Different combinations of this information feed classification algorithms to optimally discriminate bright and dark lesions from healthy tissue. Through several experiments, the ability of the proposed system to identify diabetic retinopathy signs is validated using different public databases with a large degree of variability and without image exclusion. This work has been partially supported by the Spanish Ministry of Economy and Competitiveness through project DPI2016-77869 and by GVA through project PROMETEO/2019/109. Colomer, A.; Igual García, J.; Naranjo Ornedo, V. (2020). Detection of Early Signs of Diabetic Retinopathy Based on Textural and Morphological Information in Fundus Images. Sensors, 20(4), 1-20. https://doi.org/10.3390/s20041005
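
    The granulometric profile that feeds the classifier can be illustrated in a few lines: successive morphological openings with growing structuring elements progressively erase bright structures, and the amount of intensity removed at each scale describes their size distribution. The sketch below is a minimal illustration under our own assumptions (disk-shaped structuring elements, a plain intensity-sum measure), not the authors' exact configuration.

```python
import numpy as np
from skimage.morphology import disk, opening

def granulometric_profile(patch, max_radius=10):
    """Granulometry of bright structures: fraction of the total
    intensity removed by openings of increasing size."""
    total = patch.sum() + 1e-12
    surviving = [opening(patch, disk(r)).sum() / total
                 for r in range(1, max_radius + 1)]
    # The drop between consecutive openings is the pattern spectrum:
    # structures of radius about r contribute to the r-th bin.
    return -np.diff(np.concatenate(([1.0], surviving)))

# Hypothetical usage on a grey-level fundus patch:
# features = granulometric_profile(green_channel[y:y+64, x:x+64])
```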

    Assessment of sparse-based inpainting for retinal vessel removal

    [EN] Some important eye diseases, like macular degeneration or diabetic retinopathy, can induce changes visible on the retina, for example as lesions. Segmentation of lesions or extraction of textural features from fundus images are possible steps towards the automatic detection of such diseases, which could facilitate screening as well as provide support for clinicians. For the task of detecting significant features, retinal blood vessels are considered interference on the retinal images. If these blood vessel structures could be suppressed, it might lead to a more accurate segmentation of retinal lesions as well as a better extraction of textural features to be used for pathology detection. This work proposes the use of sparse representations and dictionary learning techniques for retinal vessel inpainting. The performance of the algorithm is tested on greyscale and RGB images from the DRIVE and STARE public databases, employing different neighbourhoods and sparseness factors. Moreover, a comparison with the most common inpainting family, diffusion-based methods, is carried out. For this purpose, two different ways of assessing the quality of the inpainting are presented and used to evaluate the results of non-artificial inpainting, i.e. where a reference image does not exist. The results suggest that sparse-based inpainting performs very well for retinal blood vessel removal, which will be useful for the future detection and classification of eye diseases. (C) 2017 Elsevier B.V. All rights reserved. This work was supported by the NILS Science and Sustainability Programme (014-ABEL-IM-2013) and by the Ministerio de Economia y Competitividad of Spain, Project ACRIMA (TIN2013-46751-R). The work of Adrian Colomer has been supported by the Spanish Government under FPI Grant BES-2014-067889. Colomer, A.; Naranjo Ornedo, V.; Engan, K.; Skretting, K. (2017). Assessment of sparse-based inpainting for retinal vessel removal. Signal Processing: Image Communication, 59, 73-82. https://doi.org/10.1016/j.image.2017.03.018
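
    A minimal sketch of the sparse-coding idea, assuming a greyscale fundus image and a binary vessel mask: a patch dictionary is learnt from vessel-free regions (here with scikit-learn's MiniBatchDictionaryLearning), each patch crossed by a vessel is sparse-coded over its known pixels only via orthogonal matching pursuit, and the masked pixels are filled from the reconstruction. Patch size, number of atoms and sparseness factor are illustrative, not the values assessed in the paper.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.feature_extraction.image import extract_patches_2d
from sklearn.linear_model import orthogonal_mp

def inpaint_vessels(image, vessel_mask, patch=8, n_atoms=128, sparsity=5):
    """Fill vessel pixels of a greyscale image from a patch dictionary
    learnt on vessel-free regions (sparse-coding inpainting sketch)."""
    # 1) Learn the dictionary from patches containing no vessel pixels.
    imgs = extract_patches_2d(image, (patch, patch),
                              max_patches=5000, random_state=0)
    msks = extract_patches_2d(vessel_mask.astype(float), (patch, patch),
                              max_patches=5000, random_state=0)
    clean = imgs[msks.reshape(len(msks), -1).sum(axis=1) == 0]
    dico = MiniBatchDictionaryLearning(n_components=n_atoms, batch_size=64,
                                       random_state=0)
    D = dico.fit(clean.reshape(len(clean), -1)).components_.T  # (p*p, K)

    out = image.astype(float).copy()
    for i in range(0, image.shape[0] - patch + 1, patch):   # tile the image
        for j in range(0, image.shape[1] - patch + 1, patch):
            m = vessel_mask[i:i+patch, j:j+patch].ravel().astype(bool)
            if not m.any() or m.all():
                continue
            y = out[i:i+patch, j:j+patch].ravel()
            # 2) Sparse-code the patch from its known pixels only.
            x = orthogonal_mp(D[~m], y[~m], n_nonzero_coefs=sparsity)
            # 3) Replace the vessel pixels with the reconstruction.
            y[m] = (D @ x)[m]
            out[i:i+patch, j:j+patch] = y.reshape(patch, patch)
    return out
```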

    Retinal Disease Screening through Local Binary Patterns

    © 2015 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. This work investigates discrimination capabilities in the texture of fundus images to differentiate between pathological and healthy images. For this purpose, the performance of Local Binary Patterns (LBP) as a texture descriptor for retinal images has been explored and compared with other descriptors such as LBP filtering (LBPF) and local phase quantization (LPQ). The goal is to distinguish between diabetic retinopathy (DR), age-related macular degeneration (AMD) and normal fundus images by analysing the texture of the retina background, avoiding a previous lesion segmentation stage. Five experiments (separating DR from normal, AMD from normal, pathological from normal, DR from AMD, and the three classes together) were designed and validated with the proposed procedure, obtaining promising results. For each experiment, several classifiers were tested. An average sensitivity and specificity higher than 0.86 in all cases, and of almost 1 and 0.99, respectively, for AMD detection, were achieved. These results suggest that the method presented in this paper is a robust algorithm for describing retina texture and can be useful in a diagnosis aid system for retinal disease screening. This work was supported by the NILS Science and Sustainability Programme (010-ABEL-IM-2013) and by the Ministerio de Economia y Competitividad of Spain, Project ACRIMA (TIN2013-46751-R). The work of A. Colomer was supported by the Spanish Government under FPI Grant BES-2014-067889. Morales, S.; Engan, K.; Naranjo Ornedo, V.; Colomer, A. (2015). Retinal Disease Screening through Local Binary Patterns. IEEE Journal of Biomedical and Health Informatics, (99), 1-8. https://doi.org/10.1109/JBHI.2015.2490798
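
    For reference, the core LBP descriptor used throughout these screening experiments can be computed with scikit-image as below; the histogram of uniform, rotation-invariant codes is the per-image feature fed to a classifier. This is a minimal single-scale sketch, not the multi-resolution LBP/LBPF/LPQ setup compared in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

def lbp_histogram(image, P=8, R=1):
    """Histogram of uniform, rotation-invariant LBP codes (P+2 bins)."""
    codes = local_binary_pattern(image, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Hypothetical screening pipeline: one histogram per fundus image,
# labels 0 = normal, 1 = pathological (DR or AMD).
# X = np.stack([lbp_histogram(img) for img in images])
# clf = SVC(kernel="rbf").fit(X, labels)
```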

    Glaucoma Detection from Raw SD-OCT Volumes: a Novel Approach Focused on Spatial Dependencies

    [EN] Background and objective: Glaucoma is the leading cause of blindness worldwide. Many studies based on fundus image and optical coherence tomography (OCT) imaging have been developed in the literature to help ophthalmologists through artificial-intelligence techniques. Currently, 3D spectral-domain optical coherence tomography (SD-OCT) samples have become more important, since they could contain promising information for glaucoma detection. To analyse the hidden knowledge of the 3D scans for glaucoma detection, we have proposed, for the first time, a deep-learning methodology based on leveraging the spatial dependencies of the features extracted from the B-scans. Methods: The experiments were performed on a database composed of 176 healthy and 144 glaucomatous SD-OCT volumes centred on the optic nerve head (ONH). The proposed methodology consists of two well-differentiated training stages: a slide-level feature extractor and a volume-based predictive model. The slide-level discriminator is characterised by two new convolutional modules, residual and attention, which are combined via skip-connections with other fine-tuned architectures. Regarding the second stage, we first carried out data-volume conditioning before extracting the features from the slides of the SD-OCT volumes. Then, Long Short-Term Memory (LSTM) networks were used to combine the recurrent dependencies embedded in the latent space into a holistic feature vector, generated by the proposed sequential-weighting module (SWM). Results: The feature extractor reports AUC values higher than 0.93 in both the primary and external test sets. In addition, the proposed end-to-end system based on a combination of CNN and LSTM networks achieves an AUC of 0.8847 in the prediction stage, which outperforms other state-of-the-art approaches intended for glaucoma detection. Additionally, Class Activation Maps (CAMs) were computed to highlight the most interesting regions per B-scan when discerning between healthy and glaucomatous eyes from raw SD-OCT volumes. Conclusions: The proposed model is able to extract the features from the B-scans of the volumes and combine the information of the latent space to perform a volume-level glaucoma prediction. Our model, which combines residual and attention blocks with a sequential-weighting module to refine the LSTM outputs, surpasses the results achieved by current state-of-the-art methods focused on 3D deep-learning architectures. The authors gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used here. This work has been funded by the GALAHAD project [H2020-ICT-2016-2017, 732613], the SICAP project (DPI2016-77869-C2-1-R) and GVA through project PROMETEO/2019/109. The work of Gabriel García has been supported by the Spanish State Research Agency, PTA2017-14610-I. García-Pardo, JG.; Colomer, A.; Naranjo Ornedo, V. (2021). Glaucoma Detection from Raw SD-OCT Volumes: a Novel Approach Focused on Spatial Dependencies. Computer Methods and Programs in Biomedicine, 200, 1-16. https://doi.org/10.1016/j.cmpb.2020.105855
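
    The two-stage design (a per-B-scan feature extractor, a recurrent combination of the slice features, and a learnt weighting of the LSTM outputs) can be sketched in PyTorch as below. The encoder here is a toy stand-in for the fine-tuned residual/attention backbone, and the softmax weighting is a simplified reading of the sequential-weighting module; all shapes and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VolumeClassifier(nn.Module):
    """Sketch: a small CNN encodes each B-scan, an LSTM links the
    slice features, and a learnt weighting collapses the sequence
    into one volume-level glaucoma probability."""
    def __init__(self, feat_dim=64, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(                 # per-B-scan features
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)              # sequential weighting
        self.head = nn.Linear(hidden, 1)

    def forward(self, volume):                        # (B, T, 1, H, W)
        b, t = volume.shape[:2]
        feats = self.encoder(volume.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(feats)                     # (B, T, hidden)
        w = torch.softmax(self.attn(seq), dim=1)      # weight per B-scan
        pooled = (w * seq).sum(dim=1)                 # holistic feature vector
        return torch.sigmoid(self.head(pooled))

# vol = torch.randn(2, 64, 1, 128, 128)   # 2 volumes of 64 B-scans each
# prob = VolumeClassifier()(vol)
```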

    Self-learning for weakly supervised Gleason grading of local patterns

    © 2021 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. [EN] Prostate cancer is one of the main diseases affecting men worldwide. The gold standard for diagnosis and prognosis is the Gleason grading system. In this process, pathologists manually analyze prostate histology slides under the microscope, in a highly time-consuming and subjective task. In recent years, computer-aided-diagnosis (CAD) systems have emerged as a promising tool to support pathologists in daily clinical practice. Nevertheless, these systems are usually trained using tedious and error-prone pixel-level annotations of Gleason grades in the tissue. To alleviate the need for manual pixel-wise labeling, just a handful of works have been presented in the literature. Furthermore, despite the promising results achieved on global scoring, the location of cancerous patterns in the tissue is only qualitatively addressed. These heatmaps of tumor regions, however, are crucial to the reliability of CAD systems, as they provide explainability for the system's output and give pathologists confidence that the model is focusing on medically relevant features. Motivated by this, we propose a novel weakly supervised deep-learning model, based on self-learning CNNs, that leverages only the global Gleason score of gigapixel whole slide images during training to accurately perform both grading of patch-level patterns and biopsy-level scoring. To evaluate the performance of the proposed method, we perform extensive experiments on three different external datasets for patch-level Gleason grading, and on two different test sets for global Grade Group prediction. We empirically demonstrate that our approach outperforms its supervised counterpart on patch-level Gleason grading by a large margin, as well as state-of-the-art methods on global biopsy-level scoring. In particular, the proposed model brings an average improvement in the Cohen's quadratic kappa (κ) score of nearly 18% compared to full supervision for the patch-level Gleason grading task. This suggests that the absence of the annotator's bias in our approach, and the capability of using large weakly labeled datasets during training, lead to higher-performing and more robust models. Furthermore, the raw features obtained from the patch-level classifier were shown to generalize better to the subjective global biopsy-level scoring than previous approaches in the literature. This work was supported by the Spanish Ministry of Economy and Competitiveness through Projects DPI2016-77869 and PID2019-105142RB-C21. Silva-Rodríguez, J.; Colomer, A.; Dolz, J.; Naranjo Ornedo, V. (2021). Self-learning for weakly supervised Gleason grading of local patterns. IEEE Journal of Biomedical and Health Informatics, 25(8), 3094-3104. https://doi.org/10.1109/JBHI.2021.3061457
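
    The self-learning ingredient can be illustrated independently of the CNN: train on the available labels, pseudo-label the most confident unlabelled patches, absorb them into the training pool, and retrain. The sketch below uses a stand-in scikit-learn classifier on pre-computed patch features; the paper's actual model is a CNN supervised only by global Gleason scores, so treat this as a generic pseudo-labelling loop, not the authors' method.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def self_learning(X_lab, y_lab, X_unlab, rounds=3, conf_thr=0.9):
    """Pseudo-labelling: train, label the confident unlabelled patches,
    absorb them into the training pool, retrain."""
    X, y, pool = X_lab.copy(), y_lab.copy(), X_unlab.copy()
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    for _ in range(rounds):
        clf.fit(X, y)
        if len(pool) == 0:
            break
        proba = clf.predict_proba(pool)
        confident = proba.max(axis=1) >= conf_thr
        if not confident.any():
            break
        X = np.vstack([X, pool[confident]])
        y = np.concatenate([y, clf.classes_[proba[confident].argmax(axis=1)]])
        pool = pool[~confident]
    return clf
```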

    Efficient Cancer Classification by Coupling Semi Supervised and Multiple Instance Learning

    [EN] The annotation of large datasets is often the bottleneck in the successful application of artificial intelligence in computational pathology. For this reason, Multiple Instance Learning (MIL) and Semi-Supervised Learning (SSL) approaches have recently gained popularity, because they require fewer annotations. In this work we couple SSL and MIL to train a deep learning classifier that combines the advantages of both methods and overcomes their limitations. Our method is able to learn from the global whole slide image (WSI) diagnosis and a combination of labeled and unlabeled patches. Furthermore, we propose and evaluate an efficient labeling paradigm that guarantees a strong classification performance when combined with our learning framework. We compare our method to SSL and MIL baselines, the state of the art, and completely supervised training. With only a small percentage of patch labels, our proposed model achieves a competitive performance on SICAPv2 (Cohen's kappa of 0.801 with 450 patch labels), PANDA (Cohen's kappa of 0.794 with 22,023 patch labels) and Camelyon16 (ROC AUC of 0.913 with 433 patch labels). Our code is publicly available at https://github.com/arneschmidt/ssl_and_mil_cancer_classification. This work was supported in part by the European Union's Horizon 2020 Research and Innovation Programme through the Marie Skłodowska-Curie project CLARIFY (Cloud Artificial Intelligence For pathologY) under Grant 860627, and in part by the Spanish Ministry of Science and Innovation under Project PID2019-105142RB-C22. Schmidt, A.; Silva-Rodríguez, J.; Molina, R.; Naranjo Ornedo, V. (2022). Efficient Cancer Classification by Coupling Semi Supervised and Multiple Instance Learning. IEEE Access, 10, 9763-9773. https://doi.org/10.1109/ACCESS.2022.3143345
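
    One way to couple the two signals in a single objective, sketched below for a binary case: a MIL term supervises the bag (WSI) prediction, obtained here by max-pooling patch probabilities, with the global diagnosis, while an SSL-style term applies ordinary cross-entropy to the few labelled patches. The max-pooling aggregation and the equal loss weighting are our own illustrative choices, not necessarily those of the paper.

```python
import torch
import torch.nn.functional as F

def mil_ssl_loss(patch_logits, bag_label, lab_idx, lab_targets):
    """Couple MIL and SSL in one objective (illustrative, binary case):
    - MIL term: the bag prediction is the max over patch probabilities,
      supervised by the global WSI diagnosis.
    - SSL term: cross-entropy on the small set of labelled patches."""
    patch_probs = torch.sigmoid(patch_logits.squeeze(-1))   # (n_patches,)
    bag_prob = patch_probs.max()                            # MIL max-pooling
    mil_term = F.binary_cross_entropy(bag_prob, bag_label)
    ssl_term = F.binary_cross_entropy(patch_probs[lab_idx], lab_targets)
    return mil_term + ssl_term

# logits = model(patches)                 # (n_patches, 1), hypothetical model
# loss = mil_ssl_loss(logits, torch.tensor(1.),
#                     torch.tensor([0, 7]), torch.tensor([0., 1.]))
```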

    Evaluation of fractal dimension effectiveness for damage detection in retinal background

    [EN] This work investigates the characterization of bright lesions in retinal fundus images using texture analysis techniques. Exudates and drusen are evidence of retinal damage in diabetic retinopathy (DR) and age-related macular degeneration (AMD), respectively. Automatic detection of pathological tissue could make early detection of these diseases possible. In this work, fractal analysis is explored in order to discriminate between pathological and healthy retinal texture. After a thorough preprocessing step, in which spatial and colour normalization are performed, the fractal dimension is extracted locally by computing the Hurst exponent (H) along different directions. The greyscale image is described by the increments of the fractional Brownian motion model, and the H parameter is computed by linear regression in the frequency domain. The ability of the fractal dimension to detect pathological tissue is demonstrated using an in-house system, based on fractal analysis and a Support Vector Machine, able to achieve around 70% and 83% accuracy on the E-OPHTHA and DIARETDB1 public databases, respectively. In a second experiment, the fractal descriptor is combined with texture information extracted by Local Binary Patterns, improving bright-lesion detection. Accuracy, sensitivity and specificity values higher than 89%, 80% and 90%, respectively, suggest that the method presented in this paper is a robust algorithm for describing retina texture and can be useful in the automatic detection of DR and AMD. This paper was supported by the European Union's Horizon 2020 research and innovation programme under the Project GALAHAD [H2020-ICT-2016-2017, 732613]. In addition, this work was partially funded by the Ministerio de Economia y Competitividad of Spain, Project SICAP [DPI2016-77869-C2-1-R]. The work of Adrian Colomer has been supported by the Spanish Government under an FPI Grant [BES-2014-067889]. We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan Xp GPU used for this research. Colomer, A.; Naranjo Ornedo, V.; Janvier, T.; Mossi García, JM. (2018). Evaluation of fractal dimension effectiveness for damage detection in retinal background. Journal of Computational and Applied Mathematics, 337, 341-353. https://doi.org/10.1016/j.cam.2018.01.005
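
    A minimal version of the frequency-domain Hurst estimation reads as follows: a 1-D fractional-Brownian-motion profile has a power spectrum decaying as f^-(2H+1), so H follows from the slope of a log-log linear regression of the periodogram. This is one common spectral convention; the exact fBm-increment formulation and the set of regression directions used in the paper are simplified here.

```python
import numpy as np

def hurst_spectral(profile):
    """Estimate the Hurst exponent H of a 1-D grey-level profile,
    modelled as fBm, from the slope of its log-log power spectrum
    (for fBm, PSD(f) ~ f^-(2H+1); fractal dimension D = 2 - H)."""
    profile = profile - profile.mean()
    psd = np.abs(np.fft.rfft(profile)) ** 2
    freqs = np.fft.rfftfreq(len(profile))
    keep = freqs > 0                        # drop the DC component
    slope, _ = np.polyfit(np.log(freqs[keep]), np.log(psd[keep] + 1e-12), 1)
    return float(np.clip((-slope - 1) / 2.0, 0.0, 1.0))

# Directional description of a patch (0° and 90° shown here; the paper
# regresses along several directions):
# H_rows = np.mean([hurst_spectral(r) for r in patch])
# H_cols = np.mean([hurst_spectral(c) for c in patch.T])
```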

    Computer-Aided Diagnosis Software for Hypertensive Risk Determination Through Fundus Image Processing

    "(c) 20xx IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works for resale or redistribution to servers or lists, or reuse of any copyrighted components of this work in other works." The goal of the software proposed in this paper is to assist ophthalmologists in diagnosis and disease prevention, helping them to determine cardiovascular risk, or other diseases in which the vessels can be altered, as well as to monitor pathology progression and the response to different treatments. The performance of the tool has been evaluated by means of a double-blind study in which its sensitivity, specificity and reproducibility in discriminating between healthy fundus (without cardiovascular risk) and hypertensive patients have been calculated against an expert ophthalmologist's opinion obtained through visual inspection of the fundus image. An improvement of almost 20% has been achieved when comparing the system results with the clinical visual classification. This work was supported in part by the Ministerio de Economia y Competitividad of Spain, Project ACRIMA (TIN2013-46751-R), and partially by the Consolider-C Project (SEJ2006 14301/PSIC), the CIBER of Physiopathology of Obesity and Nutrition, an initiative of ISCIII, and the Excellence Research Program PROMETEO (Generalitat Valenciana, Conselleria de Educacion, 2008157). Morales Martínez, S.; Naranjo Ornedo, V.; Navea, A.; Alcañiz Raya, ML. (2014). Computer-Aided Diagnosis Software for Hypertensive Risk Determination Through Fundus Image Processing. IEEE Journal of Biomedical and Health Informatics, 18(6), 1757-1763. https://doi.org/10.1109/JBHI.2014.2337960

    Automatic evaluation of degree of cleanliness in capsule endoscopy based on a novel CNN architecture

    [EN] Capsule endoscopy (CE) is a widely used, minimally invasive alternative to traditional endoscopy that allows visualisation of the entire small intestine. Patient preparation can help to obtain a cleaner intestine and thus better visibility in the resulting videos. However, studies on the most effective preparation method are conflicting due to the absence of objective, automatic cleanliness evaluation methods. In this work, we aim to provide such a method, capable of presenting results on an intuitive scale, with a relatively lightweight novel convolutional neural network architecture at its core. We trained our model using 5-fold cross-validation on an extensive data set of over 50,000 image patches, collected from 35 different CE procedures, and compared it with state-of-the-art classification methods. From the patch classification results, we developed a method to automatically estimate pixel-level probabilities and deduce cleanliness evaluation scores through automatically learnt thresholds. We then validated our method in a clinical setting on 30 newly collected CE videos, comparing the resulting scores to those independently assigned by human specialists. We obtained the highest classification accuracy for the proposed method (95.23%), with significantly lower average prediction times than for the second-best method. In the validation of our method, we found acceptable agreement with two human specialists compared to inter-human agreement, showing its validity as an objective evaluation method. This work was funded by the European Union's H2020 MSCA ITN programme for the "Wireless In-body Environment Communication - WiBEC" project under grant agreement no. 675353. Additionally, we gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for this research. Figures 2 and 3 were drawn by the authors. Noorda, R.; Nevárez, A.; Colomer, A.; Pons Beltrán, V.; Naranjo Ornedo, V. (2020). Automatic evaluation of degree of cleanliness in capsule endoscopy based on a novel CNN architecture. Scientific Reports, 10(1), 1-13. https://doi.org/10.1038/s41598-020-74668-8
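
    Agreement between the automatic cleanliness scores and the specialists' scores on an ordinal scale is typically quantified with a weighted Cohen's kappa, which penalises large disagreements more heavily than small ones. A minimal sketch with hypothetical per-video scores:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical ordinal cleanliness scores for 10 videos
# (e.g. 1 = poor ... 4 = excellent visibility).
auto_scores = [3, 4, 2, 4, 1, 3, 2, 4, 3, 1]
expert_scores = [3, 4, 3, 4, 1, 2, 2, 4, 3, 2]

# Quadratic weighting penalises a 2-step disagreement four times
# as much as a 1-step one.
kappa = cohen_kappa_score(auto_scores, expert_scores, weights="quadratic")
print(f"weighted kappa = {kappa:.3f}")
```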

    Jaw tissues segmentation in dental 3D CT images using fuzzy-connectedness and morphological processing

    The success of oral surgery depends on accurate advance planning. In order to properly plan dental surgery or suitable implant placement, an accurate segmentation of the jaw tissues is necessary: the teeth, the cortical bone, the trabecular core and, above all, the inferior alveolar nerve. This manuscript presents a new automatic method based on fuzzy-connectedness object extraction and mathematical morphology processing. The method uses computed tomography data to extract different views of the jaw: a pseudo-orthopantomographic view to estimate the path of the nerve, and cross-sectional views to segment the jaw tissues. The method has been tested on a ground-truth set consisting of more than 9000 cross-sections from 20 different patients and has been evaluated using four similarity indicators (the Jaccard index, Dice's coefficient, point-to-point and point-to-curve distances), achieving promising results in all of them (0.726 ± 0.031, 0.840 ± 0.019, 0.144 ± 0.023 mm and 0.163 ± 0.025 mm, respectively). The method has proven to be largely automatic and accurate, with errors around 5% (of the diameter of the nerve), and is easily integrable into current dental planning systems. © 2012 Elsevier Ireland Ltd. This work has been supported by the project MIRACLE (DPI2007-66782-C03-01-AR07) of the Spanish Ministerio de Educacion y Ciencia. Llorens Rodríguez, R.; Naranjo Ornedo, V.; López-Mir, F.; Alcañiz Raya, ML. (2012). Jaw tissues segmentation in dental 3D CT images using fuzzy-connectedness and morphological processing. Computer Methods and Programs in Biomedicine, 108(2), 832-843. https://doi.org/10.1016/j.cmpb.2012.05.014
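
    The two overlap-based similarity indicators are standard and can be computed per cross-section as below, where `seg` is the automatic mask and `expert_mask` the ground truth (both hypothetical binary arrays).

```python
import numpy as np

def jaccard_dice(seg, gt):
    """Overlap indices used to validate a segmentation against ground
    truth: Jaccard = |A∩B| / |A∪B|, Dice = 2|A∩B| / (|A| + |B|)."""
    seg, gt = seg.astype(bool), gt.astype(bool)
    inter = np.logical_and(seg, gt).sum()
    union = np.logical_or(seg, gt).sum()
    return inter / union, 2 * inter / (seg.sum() + gt.sum())

# j, d = jaccard_dice(seg, expert_mask)   # one pair per cross-section
```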